• This article examines the complex efforts to regulate AI, widely regarded as one of the most powerful and risky technologies of our time.

    Sunday, September 29, 2024
  • California Governor Gavin Newsom has vetoed SB 1047, a closely watched bill aimed at regulating artificial intelligence development. Authored by State Senator Scott Wiener, the bill would have imposed liability on companies developing AI models, requiring them to implement safety protocols to mitigate potential "critical harms." Its requirements targeted models trained with at least 10^26 FLOPs (floating-point operations, a measure of total training compute) at a cost of at least $100 million; a back-of-envelope sketch of what that compute threshold implies follows this item.

    The bill faced considerable opposition across Silicon Valley, including from OpenAI and from influential technologists such as Yann LeCun, Meta's chief AI scientist. Some Democratic politicians, including U.S. Congressman Ro Khanna, also came out against it, and amendments made in response to feedback from AI companies like Anthropic did not fully resolve the concerns. Newsom had previously signaled reservations about the bill, and in announcing the veto he argued that while it was well-intentioned, it failed to consider the context in which AI systems are deployed, particularly high-risk environments or uses involving sensitive data. He criticized the bill for applying stringent standards even to basic functions of large systems, an approach he said would not effectively protect the public from the genuine threats AI poses.

    Nancy Pelosi, the long-serving Congresswoman and former House Speaker, also criticized SB 1047, calling it "well-intentioned but ill-informed." Following the veto, she commended Newsom for recognizing the need to empower smaller entrepreneurs and academic institutions rather than letting large tech companies dominate the field.

    Alongside the veto, Newsom's office highlighted his recent record on AI regulation, noting that he had signed 17 AI-related bills in the past month. He has also enlisted notable figures in the field, including Fei-Fei Li, Tino Cuéllar, and Jennifer Tour Chayes, to help California establish effective guidelines for deploying generative AI. Li, often called the "godmother of AI," had previously warned that SB 1047 could harm California's emerging AI ecosystem. Senator Wiener expressed disappointment at the veto, describing it as a setback for those advocating oversight of large corporations whose decisions affect public safety and welfare, while asserting that the debate over the bill had significantly advanced the global conversation on AI safety.
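    For scale, the bill's 10^26 FLOP trigger can be made concrete with the widely used C ≈ 6·N·D estimate of dense-transformer training compute (roughly 6 floating-point operations per parameter per training token). The Python sketch below illustrates that arithmetic only; the parameter and token counts are hypothetical assumptions, not figures from the bill or from any specific model.

        # Back-of-envelope check against SB 1047's 10^26 FLOP threshold,
        # using the common C ~= 6 * N * D approximation for training compute
        # (about 6 FLOPs per parameter per training token).
        def training_flops(n_params: float, n_tokens: float) -> float:
            """Estimate total training compute in FLOPs via the 6ND heuristic."""
            return 6.0 * n_params * n_tokens

        THRESHOLD_FLOPS = 1e26  # SB 1047's compute trigger

        # Hypothetical frontier-scale run: 1e12 parameters, 2e13 training tokens.
        flops = training_flops(n_params=1e12, n_tokens=2e13)
        print(f"Estimated training compute: {flops:.2e} FLOPs")               # 1.20e+26
        print(f"Crosses the 10^26 FLOP trigger: {flops >= THRESHOLD_FLOPS}")  # True

    Under these assumed inputs the run lands at about 1.2 × 10^26 FLOPs, just over the bill's compute trigger; the separate $100 million cost test would also have to be met for a model to be covered.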

  • Rohit Krishnan's article "What Comes After?" examines Governor Newsom's veto of SB 1047. The bill had passed the California Assembly with significant support but drew opposition questioning both its effectiveness and the evidence behind its proposed regulations. Newsom's veto message faulted the bill's focus on large-scale models, arguing that it could create a false sense of security while overlooking the risks posed by smaller, specialized models, and called instead for a regulatory framework that is adaptable and attentive to the context in which AI systems are deployed, particularly high-risk environments. He acknowledged the need to protect the public from potential AI threats but maintained that SB 1047's approach was not the most effective way to do so.

    In his statement, Newsom committed to collaborating with leading experts in generative AI, including Dr. Fei-Fei Li, to develop evidence-based regulations, signaling a shift toward a more empirical, science-based understanding of the capabilities and risks of frontier models.

    Krishnan reflects on the broader implications of the veto, suggesting that the debate over AI regulation is far from settled and will continue to evolve. While SB 1047 aimed to address existential risks and large-scale threats, he argues that the lack of concrete evidence for such risks complicates the regulatory landscape, and he critiques passing even minimally restrictive regulations without a clear understanding of their benefits. In his view, regulations should be grounded in evidence about the technology and its implications rather than speculative fears, and should prioritize human flourishing and innovation.

    He proposes several principles for future AI regulation: understand the technology, solve real user problems, and keep restrictions minimal so innovation can proceed, reserving caution for genuinely high-risk applications rather than imposing unnecessary bureaucratic hurdles. Krishnan concludes by acknowledging the challenges policymakers face in a rapidly evolving field and calling for thoughtful, evidence-based regulation that balances innovation against legitimate concerns about safety and ethics.